rings. We'll need to add an opaque identifier to ring entries,
allowing matching of requests and responses, but that's about it.
-4. GDT AND LDT VIRTUALISATION
+4. NETWORK CHECKSUM OFFLOAD
+---------------------------
+All the NICs that we support can checksum packets on behalf of guest
+OSes. We need to add appropriate per-packet flags, passed to and from
+each domain, to indicate, on transmit, which packets need the checksum
+added and, on receive, which packets have been verified as okay. We can
+steal
+Linux's interface, which is entirely sane given NIC limitations.
+
+5. GDT AND LDT VIRTUALISATION
-----------------------------
We do not allow modification of the GDT, or any use of the LDT. This
is necessary for support of unmodified applications (eg. Linux uses
the LDT). There is a discussion document on this at
/usr/groups/xeno/discussion-docs/memory_management/segment_tables.txt
It's already half implemented, but the rest is still to do.
-5. DOMAIN 0 MANAGEMENT DAEMON
+6. DOMAIN 0 MANAGEMENT DAEMON
-----------------------------
A better control daemon is required for domain 0, which keeps proper
track of machine resources and can make sensible policy choices. This
may require support in Xen; for example, notifications (eg. DOMn is
killed), and requests (eg. can DOMn allocate x frames of memory?).
-6. ACCURATE TIMERS AND WALL-CLOCK TIME
+7. ACCURATE TIMERS AND WALL-CLOCK TIME
--------------------------------------
Currently our long-term timebase free runs on CPU0, with no external
calibration. We should run ntpd on domain 0 and allow this to warp
Xen's timebase. Individual domains can then sync against that base and
not worry about relative drift (since they'll all get sync'ed
periodically by ntp).
-7. NEW DESIGN FEATURES
+8. NEW DESIGN FEATURES
----------------------
This includes the last-chance page cache and the unified buffer cache.
unsigned long i;
unsigned long flags;
- /* POLICY DECISION: Each domain has a page limit. */
- if( (p->tot_pages + bop.size) > p->max_pages )
+ /*
+ * POLICY DECISION: Each domain has a page limit.
+ * NB. The first clause is needed because bop.size could be so big that
+ * tot_pages + bop.size overflows a u_long.
+ */
+ if( (bop.size > p->max_pages) ||
+ ((p->tot_pages + bop.size) > p->max_pages) )
return -ENOMEM;
- if ( free_pfns < bop.size )
- return -ENOMEM;
-
spin_lock_irqsave(&free_list_lock, flags);
+
+ if ( free_pfns < (bop.size + (SLACK_DOMAIN_MEM_KILOBYTES >>
+ (PAGE_SHIFT-10))) )
+ {
+ spin_unlock_irqrestore(&free_list_lock, flags);
+ return -ENOMEM;
+ }
+
spin_lock(&p->page_lock);
temp = free_list.next;
spin_lock_irqsave(&free_list_lock, flags);
/* is there enough mem to serve the request? */
- if ( req_pages > free_pfns ) return -1;
-
+ if ( (req_pages + (SLACK_DOMAIN_MEM_KILOBYTES >> (PAGE_SHIFT-10))) >
+ free_pfns )
+ {
+ spin_unlock_irqrestore(&free_list_lock, flags);
+ return -1;
+ }
+
/* allocate pages and build a thread through frame_table */
temp = free_list.next;
for ( alloc_pfns = 0; alloc_pfns < req_pages; alloc_pfns++ )
#define IOREMAP_VIRT_START (MAPCACHE_VIRT_END)
#define IOREMAP_VIRT_END (IOREMAP_VIRT_START + (4*1024*1024))
+/*
+ * Amount of slack domain memory to leave in the system, in kilobytes.
+ * Prevents a hard out-of-memory crunch for things like network receive.
+ */
+#define SLACK_DOMAIN_MEM_KILOBYTES 1024
+
/* Linkage for x86 */
#define FASTCALL(x) x __attribute__((regparm(3)))
#define asmlinkage __attribute__((regparm(0)))